Governable AI: Provable Safety Under Extreme Threat Models

Wang, Donglin, Liang, Weiyun, Chen, Chunyuan, Xu, Jing, Fu, Yulong

arXiv.org Artificial Intelligence

As AI rapidly advances, the security risks it poses are becoming increasingly severe, especially in critical scenarios, including those posing existential risks. If AI becomes uncontrollable, manipulated, or actively evades safety mechanisms, it could trigger systemic disasters. Existing AI safety approaches, such as model enhancement, value alignment, and human intervention, suffer from fundamental, in-principle limitations when facing AI with extreme motivations and unlimited intelligence, and cannot guarantee security. To address this challenge, we propose a Governable AI (GAI) framework that shifts from traditional internal constraints to externally enforced structural compliance, based on cryptographic mechanisms that are computationally infeasible to break, even for future AI, under the defined threat model and well-established cryptographic assumptions. The GAI framework is composed of a simple yet reliable, fully deterministic, powerful, flexible, and general-purpose rule enforcement module (REM); governance rules; and a governable secure super-platform (GSSP) that offers end-to-end protection against compromise or subversion by AI. Decoupling the governance rules from the technical platform further enables a feasible and generalizable technical pathway for the safety governance of AI. The REM enforces the bottom line defined by the governance rules, while the GSSP ensures non-bypassability, tamper resistance, and unforgeability, eliminating all identified attack vectors. The paper also presents a rigorous formal proof of the security properties of this mechanism and demonstrates its effectiveness through a prototype implementation evaluated in representative high-stakes scenarios.


Getting the most from your data-driven transformation: 10 key principles

MIT Technology Review

The importance of data to today's businesses can't be overstated. Studies show data-driven companies are 58% more likely to beat revenue goals than non-data-driven companies and 162% more likely to significantly outperform laggards. Data analytics are helping nearly half of all companies make better decisions about everything, from the products they deliver to the markets they target. Data is becoming critical in every industry, whether it's helping farms increase the value of the crops they produce or fundamentally changing the game of basketball. Used optimally, data is nothing less than a critically important asset. Problem is, it's not always easy to put data to work. The Seagate Rethink Data report, with research and analysis by IDC, found that only 32% of the data available to enterprises is ever used and the remaining 68% goes unleveraged.


Towards a computer-interpretable actionable formal model to encode data governance rules

Zhao, Rui, Atkinson, Malcolm

arXiv.org Artificial Intelligence

Abstract -- With the needs of science and business, data sharing and reuse has become an intensive activity in many areas. In many cases, governance imposes rules concerning data use, but no existing computational technique helps data users comply with such rules. We argue that intelligent systems can improve the situation by recording provenance during processing, encoding the rules, and performing reasoning. We present our initial work, designing formal models for data rules and flow rules and the accompanying reasoning system, as a first step towards helping data providers and data users sustain productive relationships.

I. Introduction -- Data ethics and privacy are of rising importance, especially with the establishment of the GDPR [1]. Similar issues arise in research when data from various sources are used as inputs to analyses and simulations. Researchers are aware that governance rules apply to the data, but they can easily lose track of those rules when the number of sources becomes large. The large volume of rules creates problems in three respects: 1) fully reading and understanding the rules; 2) working out the consequences of combining data and their associated rules; 3) assigning rules to outputs so that results can be used compliantly. One response is to make data open and freely accessible. This sounds appealing, but rules remain, for example to properly acknowledge sources and to protect personal and commercially sensitive data, even within collaborating communities [4]. This work has been accepted and should appear in the Proceedings of the IEEE eScience 2019 Conference (BC2DC).
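The third problem above, assigning rules to outputs, can be sketched concretely. This is a hypothetical illustration of the general idea, not the paper's formal model: the classes `DataRule` and `Dataset`, the function `derive`, and the example obligations are all assumptions of this sketch. It shows one plausible flow rule: an output inherits the union of its inputs' rules, so a user can see what obligations apply to a result without re-reading every source.

```python
from dataclasses import dataclass

# Hypothetical encoding of governance rules attached to datasets.
# Names and rule vocabulary are illustrative, not from the paper.

@dataclass(frozen=True)
class DataRule:
    obligation: str   # e.g. "acknowledge-source" or "no-commercial-use"
    source: str       # the dataset the rule originally came from

@dataclass(frozen=True)
class Dataset:
    name: str
    rules: frozenset  # governance rules attached to this dataset

def derive(name: str, *inputs: Dataset) -> Dataset:
    """A simple flow rule: the output of a processing step inherits the
    union of its inputs' rules."""
    inherited = frozenset().union(*(d.rules for d in inputs))
    return Dataset(name, inherited)

def obligations(ds: Dataset) -> set[str]:
    """The distinct obligations a data user must satisfy for a dataset."""
    return {r.obligation for r in ds.rules}
```

For example, combining a survey dataset that requires source acknowledgement with a clinical dataset that forbids commercial use yields a result carrying both obligations, with each rule still traceable to its source via provenance.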